
Trust, Governance, and AI Decision Making

Communications of the ACM

IBM's Global Leader on Responsible AI and AI Governance, Francesca Rossi, arrived at her current area of focus after a 2014 sabbatical at the Harvard Radcliffe Institute, which inspired her to think beyond her training as an academic researcher and incorporate both humanistic and technological perspectives into the development of AI systems. In the intervening years, she helped build IBM's internal AI Ethics Board and foster external partnerships to shape best practices for responsible AI. Here, we talk about trust, governance, and what these issues have to do with AI decision making. The ethical issues around the use of AI have evolved with the technology's capabilities: traditional machine learning approaches introduced issues like fairness, explainability, privacy, and transparency.


Thinking Fast and Slow with Deep Learning and Tree Search

Neural Information Processing Systems

Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans.


Thinking Fast and Slow in Human and Machine Intelligence

Communications of the ACM

Human intelligence has generally been studied by focusing on two primary levels: cognitive science, which examines the mind, and neuroscience, which focuses on the brain. Both approaches have influenced artificial intelligence (AI) research, leading to the development of various cognitive architectures with emergent behaviors [23]. In this article, we propose an approach inspired by human cognition, specifically drawing on cognitive theories about human reasoning and decision making. We are inspired by the book Thinking, Fast and Slow by Daniel Kahneman [20], which categorizes human thought processes into two systems: System 1 (fast thinking) and System 2 (slow thinking) [37]. System 1, or "thinking fast," is responsible for intuitive, quick, and often unconscious decisions.


Reviews: Thinking Fast and Slow with Deep Learning and Tree Search

Neural Information Processing Systems

SUMMARY: The paper proposes an algorithm that combines imitation learning with tree search, which results in an apprentice learning from an ever-improving expert. A DQN is trained to learn a policy derived from an MCTS agent, with the DQN providing generalisation to unseen states. The DQN is then also used as feedback to improve the expert, which can in turn be used to retrain the DQN, and so on. The paper makes two contributions: (1) a new target for imitation learning, which is empirically shown to outperform a previously suggested target and which results in the apprentice learning a policy of equal strength to the expert. COMMENTS: I found the paper generally well-written, clear and easy to follow, barring Section 6.
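The expert-iteration loop the review summarises (a search-based expert guided by the apprentice, and an apprentice trained to imitate the expert's improved choices) can be sketched in miniature. This is an illustrative toy, not the paper's method: a single-state "game" with hidden action rewards stands in for the environment, a simulation-based search stands in for MCTS, and a probability table stands in for the DQN; `REWARDS`, the simulation count, and the learning rate are all invented for the example.

```python
import random

random.seed(0)

# Toy single-state "game": four actions with hidden rewards.
REWARDS = [0.1, 0.9, 0.3, 0.5]
N_ACTIONS = len(REWARDS)

def expert_search(prior, n_sims=50):
    """Cheap stand-in for MCTS: sample actions from the apprentice's
    prior, record their returns, and pick the best visited action."""
    visits = [0] * N_ACTIONS
    totals = [0.0] * N_ACTIONS
    for _ in range(n_sims):
        a = random.choices(range(N_ACTIONS), weights=prior)[0]
        visits[a] += 1
        totals[a] += REWARDS[a]
    means = [totals[a] / visits[a] if visits[a] else float("-inf")
             for a in range(N_ACTIONS)]
    return max(range(N_ACTIONS), key=lambda a: means[a])

def expert_iteration(rounds=30, lr=0.3):
    """Apprentice imitates the expert; the improved prior then guides
    the expert's next search, and so on."""
    prior = [1.0 / N_ACTIONS] * N_ACTIONS  # apprentice starts uniform
    for _ in range(rounds):
        best = expert_search(prior)
        # Imitation step: move the apprentice toward the expert's choice.
        target = [1.0 if a == best else 0.0 for a in range(N_ACTIONS)]
        prior = [(1 - lr) * p + lr * t for p, t in zip(prior, target)]
    return prior

prior = expert_iteration()
```

After a few rounds the apprentice's prior concentrates on the best action, which in turn makes the expert's search cheaper, mirroring the mutual-improvement loop the review describes.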


Biochemical Prostate Cancer Recurrence Prediction: Thinking Fast & Slow

You, Suhang, Adap, Sanyukta, Thakur, Siddhesh, Baheti, Bhakti, Bakas, Spyridon

arXiv.org Artificial Intelligence

Time to biochemical recurrence in prostate cancer is essential for prognostic monitoring of patient progression after prostatectomy, which assesses the efficacy of the surgery. In this work, we propose to leverage multiple instance learning through a two-stage "thinking fast & slow" strategy for time-to-recurrence (TTR) prediction. The first ("thinking fast") stage finds the whole slide image (WSI) area most relevant to biochemical recurrence, and the second ("thinking slow") stage leverages higher-resolution patches to predict TTR. Our approach achieves a mean C-index (Ci) of 0.733 (θ = 0.059) on our internal validation and Ci = 0.603 on the LEOPARD challenge validation set. Post hoc attention visualization shows that the most attentive area contributes to the TTR prediction.
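As a rough illustration of the two-stage idea, and emphatically not the paper's trained model: a fast attention pass over low-resolution patch features can rank WSI regions, and a slow pass can pool the matching high-resolution features into a linear risk score. The attention and regression weights below are hypothetical placeholders, and the toy feature vectors stand in for what a trained extractor would produce.

```python
import math

def attention_scores(features, w_attn):
    """'Thinking fast': one attention logit per patch (dot product),
    normalised with a softmax."""
    logits = [sum(w * f for w, f in zip(w_attn, feat)) for feat in features]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def two_stage_ttr(low_res, high_res, w_attn, w_reg, top_k=1):
    """'Thinking slow': average the high-resolution features of the
    top-k most attended patches and apply a linear risk predictor."""
    scores = attention_scores(low_res, w_attn)
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    pooled = [sum(high_res[i][d] for i in top) / len(top)
              for d in range(len(w_reg))]
    return sum(w * p for w, p in zip(w_reg, pooled))

# Three toy patches, two features each; patch 0 draws the most attention.
low_res = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
high_res = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
risk = two_stage_ttr(low_res, high_res, w_attn=[1.0, 0.0], w_reg=[0.5, 0.5])
```

The key point of the two-stage design is that only the small attended region ever needs high-resolution processing, which is what makes the slow stage affordable on gigapixel slides.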


Thinking Fast and Slow in Large Language Models

Hagendorff, Thilo, Fabi, Sarah, Kosinski, Michal

arXiv.org Artificial Intelligence

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Therefore, it is of great importance to evaluate their emerging abilities. In this study, we show that LLMs like GPT-3 exhibit behavior that strikingly resembles human-like intuition - and the cognitive errors that come with it. However, LLMs with higher cognitive capabilities, in particular ChatGPT and GPT-4, learned to avoid succumbing to these errors and perform in a hyperrational manner. For our experiments, we probe LLMs with the Cognitive Reflection Test (CRT) as well as semantic illusions that were originally designed to investigate intuitive decision-making in humans. Our study demonstrates that investigating LLMs with methods from psychology has the potential to reveal otherwise unknown emergent traits.


Fast and Slow Planning

Fabiano, Francesco, Pallagani, Vishal, Ganapini, Marianna Bergamaschi, Horesh, Lior, Loreggia, Andrea, Murugesan, Keerthiram, Rossi, Francesca, Srivastava, Biplav

arXiv.org Artificial Intelligence

The concept of Artificial Intelligence has gained a lot of attention over the last decade. In particular, AI-based tools have been employed in several scenarios and are, by now, pervading our everyday life. Nonetheless, most of these systems lack many capabilities that we would naturally consider to be included in a notion of "intelligence". In this work, we present an architecture that, inspired by the cognitive theory known as Thinking Fast and Slow by D. Kahneman, is tasked with solving planning problems in different settings, specifically: classical and multi-agent epistemic. The system proposed is an instance of a more general AI paradigm, referred to as SOFAI (for Slow and Fast AI). SOFAI exploits multiple solving approaches, with different capabilities that characterize them as either fast or slow, and a metacognitive module to regulate them. This combination of components, which roughly reflects the human reasoning process according to D. Kahneman, allowed us to enhance the reasoning process that, in this case, is concerned with planning in two different settings. The behavior of this system is then compared to state-of-the-art solvers, showing that the newly introduced system presents better results in terms of generality, solving a wider set of problems with an acceptable trade-off between solving times and solution accuracy.



Thinking fast and slow with AI

#artificialintelligence

Acclaimed economist Daniel Kahneman, in his book Thinking, Fast and Slow, proposed that the human brain has two separate systems for decision making: System 1 and System 2. System 1 controls unconscious decision making, such as walking, climbing stairs, brushing teeth, and other tasks that may not require conscious thought. System 2, on the other hand, is a slower and more deliberative type of decision making, used for tasks like playing chess or solving mathematical equations. Now, what if we apply the same principle to artificial intelligence systems? Notably, this is not an entirely new concept. AI pioneer Yoshua Bengio has spoken about it at length on multiple occasions.


Thinking Fast and Slow in AI: the Role of Metacognition

Ganapini, Marianna Bergamaschi, Campbell, Murray, Fabiano, Francesco, Horesh, Lior, Lenchner, Jon, Loreggia, Andrea, Mattei, Nicholas, Rossi, Francesca, Srivastava, Biplav, Venable, Kristen Brent

arXiv.org Artificial Intelligence

AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. Moreover, while these successes can be accredited to improved algorithms and techniques, they are also tightly linked to the availability of huge datasets and computational power. State-of-the-art AI still lacks many capabilities that would naturally be included in a notion of (human) intelligence. We argue that a better study of the mechanisms that allow humans to have these capabilities can help us understand how to imbue AI systems with these competencies. We focus especially on D. Kahneman's theory of thinking fast and slow, and we propose a multi-agent AI architecture where incoming problems are solved by either system 1 (or "fast") agents, that react by exploiting only past experience, or by system 2 (or "slow") agents, that are deliberately activated when there is the need to reason and search for optimal solutions beyond what is expected from the system 1 agent. Both kinds of agents are supported by a model of the world, containing domain knowledge about the environment, and a model of "self", containing information about past actions of the system and solvers' skills.
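The fast/slow arbitration described above can be caricatured in a few lines. This is a hedged sketch, not the SOFAI implementation: the "fast" agent is a memory lookup with a fixed confidence (a stand-in for exploiting past experience), the "slow" agent is a brute-force search (smallest factor of an integer, a stand-in for a costly deliberate solver), and the metacognitive module is reduced to a simple confidence threshold; all three are assumptions made for illustration.

```python
def fast_agent(problem, memory):
    """System-1 agent: answer from past experience (a cache) with a
    fixed confidence; unknown problems get zero confidence."""
    if problem in memory:
        return memory[problem], 0.95
    return None, 0.0

def slow_agent(n):
    """System-2 agent: deliberate brute-force search for the smallest
    factor greater than 1 (a stand-in for an expensive solver)."""
    for d in range(2, n + 1):
        if n % d == 0:
            return d
    return n

def sofai_solve(problem, memory, threshold=0.9):
    """Metacognitive module: accept the fast answer only when its
    confidence clears the threshold; otherwise activate the slow agent
    and store the result as new experience for the fast agent."""
    answer, confidence = fast_agent(problem, memory)
    if confidence >= threshold:
        return answer, "fast"
    answer = slow_agent(problem)
    memory[problem] = answer
    return answer, "slow"

memory = {}
first = sofai_solve(91, memory)    # unseen problem: slow, deliberate path
second = sofai_solve(91, memory)   # seen before: fast, from experience
```

The growing `memory` plays the role of the "model of self" above: as the system accumulates experience, more problems are handled by the cheap System-1 path, and the metacognitive check decides when deliberation is still needed.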